
Conversation

sestinj
Contributor

@sestinj sestinj commented Sep 22, 2025

Summary

  • Added token usage tracking to the OpenAI adapter to match the existing implementations in Anthropic and Gemini adapters
  • Modified the streaming response handler to properly collect and emit usage information
  • Verified all three providers (OpenAI, Anthropic, Gemini) now consistently track and report token usage

Changes

  • OpenAI Adapter: Updated chatCompletionStream method to handle usage chunks that arrive at the end of the stream
  • Tests: Added comprehensive test coverage for token usage tracking across all three providers
  • Verification: All existing tests pass with the expectUsage: true flag

Test Plan

  • Run existing test suite with API keys
  • Verify OpenAI adapter properly tracks usage in streaming responses
  • Verify Anthropic adapter continues to track usage correctly
  • Verify Gemini adapter continues to track usage correctly
  • Add unit tests for token usage tracking

Linear Issue

CON-3935

🤖 Generated with Claude Code


Summary by cubic

Adds token usage tracking to the OpenAI adapter and defers the usage event until after all streamed content. Aligns OpenAI with Anthropic and Gemini so all providers report prompt, completion, and total tokens (CON-3935).

  • New Features
    • Update OpenAI chatCompletionStream to buffer the usage chunk and emit it last; non-streaming passes usage through as-is.
    • Add tests for usage tracking across OpenAI, Anthropic, and Gemini (streaming and non-streaming).
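The buffering behavior described above can be sketched as a small async generator. This is a hypothetical illustration rather than the adapter's actual code; the chunk shape and field names are assumptions modeled on OpenAI's streaming wire format, where `stream_options: { include_usage: true }` causes a final chunk with an empty `choices` array and a populated `usage` object.

```typescript
// Sketch of the "buffer the usage chunk, emit it last" approach. Not the
// adapter's real implementation; types are simplified assumptions.
interface UsageInfo {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}

interface StreamChunk {
  choices: { delta?: { content?: string } }[];
  usage?: UsageInfo;
}

// Hold back any chunk carrying usage info until the stream is exhausted,
// so consumers always receive all content chunks before the usage event.
async function* emitUsageLast(
  stream: AsyncIterable<StreamChunk>,
): AsyncGenerator<StreamChunk> {
  let usageChunk: StreamChunk | undefined;
  for await (const chunk of stream) {
    if (chunk.usage) {
      usageChunk = chunk; // buffer instead of yielding immediately
    } else {
      yield chunk;
    }
  }
  if (usageChunk) {
    yield usageChunk; // emitted after all content chunks
  }
}
```

Deferring the usage chunk this way is what keeps OpenAI's ordering consistent with Anthropic and Gemini, where usage already arrives after the content.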

- Modified OpenAI adapter to properly handle and emit usage chunks in streaming responses
- Added logic to store usage chunks and emit them at the end of the stream
- Verified Anthropic and Gemini adapters already have complete token usage implementations
- Added comprehensive tests for token usage tracking across all three providers
- All tests passing with provided API keys

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <[email protected]>
@sestinj sestinj marked this pull request as ready for review September 29, 2025 18:40
@sestinj sestinj requested a review from a team as a code owner September 29, 2025 18:40
@sestinj sestinj requested review from Patrick-Erichsen and removed request for a team September 29, 2025 18:40
Contributor

@cubic-dev-ai cubic-dev-ai bot left a comment


1 issue found across 3 files

<file name="packages/openai-adapters/src/test/token-usage.test.ts">

<violation number="1" location="packages/openai-adapters/src/test/token-usage.test.ts:115">
Overwriting `global.fetch` without restoring leaves the mock active for subsequent tests. Please store the original fetch and restore it in afterEach/afterAll (or use `vi.spyOn`) so other suites keep the real implementation.</violation>
</file>


Contributor

@cubic-dev-ai cubic-dev-ai bot Sep 29, 2025


Overwriting global.fetch without restoring leaves the mock active for subsequent tests. Please store the original fetch and restore it in afterEach/afterAll (or use vi.spyOn) so other suites keep the real implementation.

<file context>
@@ -0,0 +1,353 @@
+      }),
+    };
+
+    global.fetch = vi.fn().mockResolvedValue(mockResponse);
+
+      const api = new AnthropicApi({ apiKey: "test", provider: "anthropic" });
</file context>

✅ Addressed in 6febaa7
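The save/restore pattern the reviewer asked for can be sketched manually as below. The referenced commit may have used `vi.spyOn` instead; the helper names here (`installMockFetch`, `restoreFetch`) are hypothetical.

```typescript
// Manual sketch of saving and restoring global fetch around a mock, so the
// stub does not leak into other test suites. Illustrative only.
const realFetch = globalThis.fetch;

// Replace global fetch with a stub that resolves to the given value.
function installMockFetch(response: unknown): void {
  globalThis.fetch = (async () => response) as unknown as typeof fetch;
}

// Put the original implementation back, e.g. from afterEach/afterAll.
function restoreFetch(): void {
  globalThis.fetch = realFetch;
}
```

With vitest, `vi.spyOn(globalThis, "fetch")` followed by `mockRestore()` (or a global `vi.restoreAllMocks()` in `afterEach`) achieves the same effect without manual bookkeeping.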

@dosubot dosubot bot added the size:L This PR changes 100-499 lines, ignoring generated files. label Sep 29, 2025
@github-project-automation github-project-automation bot moved this from Todo to In Progress in Issues and PRs Oct 1, 2025
@dosubot dosubot bot added the lgtm This PR has been approved by a maintainer label Oct 1, 2025
@dosubot dosubot bot added size:M This PR changes 30-99 lines, ignoring generated files. and removed size:L This PR changes 100-499 lines, ignoring generated files. labels Oct 12, 2025
@sestinj sestinj merged commit 0a5acb1 into main Oct 12, 2025
54 of 55 checks passed
@sestinj sestinj deleted the nate/con-3935 branch October 12, 2025 23:52
@github-project-automation github-project-automation bot moved this from In Progress to Done in Issues and PRs Oct 12, 2025
@github-actions github-actions bot locked and limited conversation to collaborators Oct 12, 2025
@github-actions github-actions bot added the tier 2 Important feature that adds new capabilities to the platform or improves critical user journeys label Oct 12, 2025
@sestinj
Contributor Author

sestinj commented Oct 12, 2025

🎉 This PR is included in version 1.24.0 🎉

The release is available on:

Your semantic-release bot 📦🚀

@sestinj
Contributor Author

sestinj commented Oct 14, 2025

🎉 This PR is included in version 1.28.0 🎉

The release is available on:

Your semantic-release bot 📦🚀

@sestinj
Contributor Author

sestinj commented Oct 15, 2025

🎉 This PR is included in version 1.2.0 🎉

The release is available on:

Your semantic-release bot 📦🚀
